Rethinking the impact of noisy labels in graph classification: A utility and privacy perspective
Li, De, Li, Xianxian, Gan, Zeming, Li, Qiyu, Qu, Bin, Wang, Jinyan
Graph neural networks based on message-passing mechanisms have achieved advanced results in graph classification tasks. However, their generalization performance degrades when noisy labels are present in the training data. Most existing noisy-label approaches focus on the visual domain or on graph node classification, and analyze the impact of noisy labels only from a utility perspective. Unlike existing work, in this paper we measure the effects of noisy labels on graph classification from both data-privacy and model-utility perspectives. We find that noisy labels degrade the model's generalization performance and strengthen membership inference attacks on graph data privacy. To this end, we propose RGLC, a robust graph neural network approach for graph classification with noisy labels. Specifically, we first accurately filter noisy samples using high-confidence samples and the first principal component vector of each class's features. Then, the robust principal component vectors and the model's outputs under data augmentation are used to correct noisy labels, guided by dual spatial information. Finally, supervised graph contrastive learning is introduced to enhance the quality of the model's embeddings and to protect the privacy of the training graph data. The utility and privacy of the proposed method are validated against twelve different methods on eight real graph classification datasets. Compared with state-of-the-art methods, RGLC achieves between 0.8% and 7.8% performance gain at a 30% noisy-label rate, and reduces the accuracy of privacy attacks to below 60%.
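The filtering step described in the abstract, comparing each sample's embedding to the first principal component direction of its labelled class, can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the function name, the cosine-similarity criterion, and the fixed threshold are all assumptions.

```python
import numpy as np

def filter_noisy_labels(embeddings, labels, num_classes, threshold=0.5):
    """Flag likely-clean samples by comparing each embedding to the first
    principal component direction of its labelled class (hypothetical
    sketch of the filtering idea; the paper also uses high-confidence
    samples, which this sketch omits)."""
    clean_mask = np.zeros(len(labels), dtype=bool)
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        X = embeddings[idx]
        # First right singular vector = first principal component direction
        # of the (uncentered) class embeddings.
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        pc1 = vt[0]
        # Cosine similarity of each sample to the class principal direction;
        # abs() absorbs the sign ambiguity of singular vectors.
        sims = X @ pc1 / (np.linalg.norm(X, axis=1) + 1e-12)
        clean_mask[idx] = np.abs(sims) >= threshold
    return clean_mask
```

A mislabelled sample tends to lie far from its labelled class's principal direction, so its cosine similarity falls below the threshold and it is excluded from the clean set.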
Face Recognition using Principal Component Analysis
Recent advances in machine learning have made face recognition a tractable problem. Before that, however, researchers made many attempts and developed various techniques to make computers capable of identifying people. One early attempt with moderate success is the eigenface method, which is based on linear algebra. In this tutorial, we will see how to build a primitive face recognition system with a simple linear algebra technique: principal component analysis.
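The eigenface pipeline the tutorial builds toward can be sketched in a few lines of numpy: compute the mean face, take the top-k principal components of the centered training images ("eigenfaces"), project every face into that low-dimensional space, and classify a probe by nearest neighbour among the projected gallery. A minimal sketch, with the function names and the plain nearest-neighbour rule chosen for illustration:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_samples, n_pixels) matrix of flattened grayscale images.
    Returns the mean face and the top-k eigenfaces (principal components)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the principal directions of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, eigenfaces):
    """Represent a face by its k eigenface coefficients."""
    return eigenfaces @ (face - mean)

def recognize(face, gallery_weights, gallery_labels, mean, eigenfaces):
    """Nearest-neighbour match in eigenface coefficient space."""
    w = project(face, mean, eigenfaces)
    dists = np.linalg.norm(gallery_weights - w, axis=1)
    return gallery_labels[np.argmin(dists)]
```

Because k is much smaller than the number of pixels, distances are computed in a compact space that keeps the directions of greatest variation across the training faces.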
Discriminant analysis based on projection onto generalized difference subspace
Fukui, Kazuhiro, Sogi, Naoya, Kobayashi, Takumi, Xue, Jing-Hao, Maki, Atsuto
This paper discusses a new type of discriminant analysis based on the orthogonal projection of data onto a generalized difference subspace (GDS). In our previous work, we demonstrated that GDS projection works as a quasi-orthogonalization of class subspaces, which is an effective feature extraction for subspace-based classifiers. Interestingly, GDS projection also works as a discriminant feature extraction through a mechanism similar to Fisher discriminant analysis (FDA). A direct proof of the connection between GDS projection and FDA is difficult due to the significant difference in their formulations. To avoid this difficulty, we first introduce geometrical Fisher discriminant analysis (gFDA), based on a simplified Fisher criterion. Our simplified criterion is derived from a heuristic yet practically plausible principle: the direction of the sample mean vector of a class is in most cases almost equal to that of the first principal component vector of the class, provided the principal component vectors are calculated by applying principal component analysis (PCA) without data centering. gFDA works stably even with few samples, bypassing the small sample size (SSS) problem of FDA. Next, we prove that gFDA is equivalent to GDS projection with a small correction term. This equivalence ensures that GDS projection inherits the discriminant ability of FDA via gFDA. Furthermore, to enhance the performance of gFDA and GDS projection, we normalize the projected vectors on the discriminant spaces. Extensive experiments using the extended Yale B+ database and the CMU face database show that gFDA and GDS projection perform comparably to or better than the original FDA and its extensions.
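The heuristic at the heart of gFDA, that a class's sample mean direction nearly coincides with its first principal component when PCA is applied without centering, is easy to check numerically. The synthetic data below is an illustration of the principle, not an experiment from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
# A class clustered around a mean far from the origin: the mean
# dominates the within-class spread, the regime the heuristic assumes.
mean_true = np.array([5.0, 2.0, 1.0])
X = mean_true + 0.3 * rng.standard_normal((200, 3))

sample_mean = X.mean(axis=0)
# PCA *without* centering: top right singular vector of the raw data.
_, _, vt = np.linalg.svd(X, full_matrices=False)
pc1 = vt[0]
pc1 *= np.sign(pc1 @ sample_mean)  # resolve the sign ambiguity

cos = (sample_mean @ pc1) / np.linalg.norm(sample_mean)
print(round(cos, 3))  # close to 1 when the mean dominates the spread
```

Without centering, the second-moment matrix is approximately the outer product of the mean with itself plus a small noise term, so its top eigenvector points almost exactly along the mean; with centering, the mean is subtracted away and no such alignment holds.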
A categorisation and implementation of digital pen features for behaviour characterisation
Prange, Alexander, Barz, Michael, Sonntag, Daniel
The research described in this paper is motivated by the development of applications for the behaviour analysis of handwriting and sketch input. Our goal is to provide other researchers with a reproducible, categorised set of features that can be used for behaviour characterisation in different scenarios. We use the term feature to describe properties of strokes and gestures which can be calculated based on the raw sensor input from capture devices, such as digital pens or tablets. In this paper, a large number of features known from the literature are presented and categorised into different subsets.
Self-Organizing Rules for Robust Principal Component Analysis
Using statistical physics techniques, including the Gibbs distribution, binary decision fields and effective energies, we propose self-organizing PCA rules which are capable of resisting outliers while fulfilling various PCA-related tasks such as obtaining the first principal component vector, the first k principal component vectors, and directly finding the subspace spanned by the first k principal component vectors without solving for each vector individually. Comparative experiments have shown that the proposed robust rules improve the performance of the existing PCA algorithms significantly when outliers are present.
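For context on the kind of learning rule the paper makes robust, the classic self-organizing rule for the first principal component is Oja's rule: an online update that converges to the top eigenvector of the data covariance while keeping the weight vector near unit norm. The sketch below implements plain Oja's rule, not the paper's outlier-resisting variants, which add robustness weighting on top of rules like this:

```python
import numpy as np

def oja_first_pc(X, lr=0.01, epochs=50, seed=0):
    """Estimate the first principal component of zero-mean data X with
    Oja's online self-organizing rule (plain version, no robustness)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            y = w @ x                      # projection onto current estimate
            w += lr * y * (x - y * w)      # Hebbian term minus a decay that
                                           # keeps ||w|| close to 1
    return w / np.linalg.norm(w)
```

A single outlier sample enters this update with full weight, which is exactly the failure mode the paper's Gibbs-distribution machinery is designed to suppress.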
Self-Organizing Rules for Robust Principal Component Analysis
Principal Component Analysis (PCA) is an essential technique for data compression and feature extraction, and has been widely used in statistical data analysis, communication theory, pattern recognition and image processing. In the neural network literature, a lot of studies have been made on learning rules for implementing PCA or on networks closely related to PCA (see Xu & Yuille, 1993 for a detailed reference list which contains more than 30 papers related to these issues).